
Search Results

Your search for Larry Lewis found 38 results.

Improving Protection of Humanitarian Organizations in Armed Conflict
/reports/2022/03/improving-protection-of-humanitarian-organizations-in-armed-conflict
We leverage CNA’s work on civilian harm, including attacks on humanitarian organizations, to identify four broad steps militaries can take to minimize these tragic incidents and improve protection of these organizations.
Such suffering can be reduced, and this report serves as a starting point for developing more ambitious and comprehensive solutions toward that goal. Larry Lewis
Leveraging AI to Mitigate Civilian Harm
/reports/2022/02/leveraging-ai-to-mitigate-civilian-harm
We have found that AI can be used to help address patterns of harm and thus reduce the likelihood of harm. We discuss some areas of focus militaries could prioritize in order to reduce risks to civilians.
Larry Lewis, Andrew Ilachinski
Putting Innovation into Practice
/reports/2020/09/putting-innovation-into-practice
Innovation is a key enabling concept in the 2018 National Defense Strategy (NDS). The US military must not only maintain effectiveness in military operations; in the face of a new competitive environment and the increasing importance of commercial technology, it must also practice innovation to maintain a military edge and meet national security goals. The critical role of innovation is repeated throughout the NDS. But what can the US do to pursue effective innovation? And what is innovation anyway? We examine innovation through specific military examples, both historical and contemporary, as well as through academic literature and past CNA products addressing innovation. After developing a functional definition of innovation, we provide best practices and principles that DOD can apply to put innovation into practice.
Larry Lewis, David Knoll
Promoting Civilian Protection during Security Assistance
/reports/2019/05/promoting-civilian-protection-during-security-assistance
For four years, the United States provided the Saudi-led coalition with military equipment and assistance used in its campaign in Yemen. During that time, the US wrestled with and debated both the legality and the wisdom of its support. After four years of conflict in Yemen, the US should be asking: what lessons can be learned from its support to the Saudi-led coalition? In light of the significant civilian protection concerns seen in Yemen, is there a way to get better outcomes from security assistance activities? This report aims to answer those questions. We analyze US support to the Saudi-led coalition and identify two gaps, one in policy and one in information. We also examine the timely issue of better protecting health care in the midst of armed conflict. In this report, we provide a policy framework for including civilian protection considerations as part of security assistance.
This framework complements existing requirements involving civilian protection (IHL compliance and Leahy vetting); it will serve as an additional layer of protection and does not detract from existing policies. Larry Lewis
AI Safety Navy Action Plan
/reports/2019/10/ai-safety-navy-action-plan
In light of the Navy’s stated commitment to using AI, and given the strategic importance of AI safety, we provide the Navy with a first step toward a comprehensive approach to safety. We use a risk management approach to frame our treatment of AI safety risks: identifying risks, analyzing them, and suggesting concrete actions for the Navy to begin addressing them. The first type of safety risk, being technical in nature, will require a collaborative effort with industry and academia to address. The second type of risk, associated with specific military missions, can be addressed through a combination of military experimentation, research, and concept development to find ways to promote effectiveness along with safety. For each type of risk, we use examples to show concrete ways of managing and reducing the risk of AI applications. We then discuss institutional changes that would help promote safety in the Navy’s AI efforts.
Larry Lewis
Redefining Human Control
/reports/2018/03/redefining-human-control
This report examines the issue of human control with regard to lethal autonomy, an issue of significant interest in United Nations discussions in the Convention on Certain Conventional Weapons (CCW) forum. We analyze this issue in light of lessons and best practices from recent U.S. operations. Based on this analysis, we make the case for a wider framework for the application of human control over the use of force. This report recommends that CCW discussions currently focusing on process considerations, such as human control, should instead focus on outcome—namely, mitigation of inadvertent engagements. This allows consideration of a more complete set of benefits and risks of lethal autonomy and better management of risks. The report also describes best practices that can collectively serve as a safety net for the use of lethal autonomous weapons. It concludes with concrete recommendations for how the international community can more effectively address the risk of inadvertent engagements from lethal autonomy.
Designing systems to follow a set of rules, specifically International Humanitarian Law, is necessary but not sufficient in itself for addressing this risk; a comprehensive safety net is also needed. Larry Lewis
ai with ai: Terminator or Data? Policy and Safety for Autonomous Weapons
/our-media/podcasts/ai-with-ai/season-1/1-39
This week Andy and Dave take a respite from the world of AI. In the meantime, Larry Lewis hosts Shawn Steene from the Office of the Secretary of Defense. Shawn manages DOD Directive 3000.09, US military policy on autonomous weapons, and is a member of the US delegation to the UN’s CCW meetings on Lethal Autonomous Weapon Systems (LAWS). Shawn and Larry discuss US policy, what DOD Directive 3000.09 actually means, and how the future of AI could more closely resemble the android Data than SKYNET from the Terminator movies. That leads to a discussion of some common misconceptions about artificial intelligence and autonomy in military applications, and how these misconceptions can manifest themselves in the UN talks. With Data having single-handedly saved the day in the eighth and tenth Star Trek movies (First Contact and Nemesis, respectively), perhaps Star Trek should be required viewing for the next UN meeting in Geneva. Larry Lewis is the Director of the Center for Autonomy and Artificial Intelligence at CNA. His areas of expertise include lethal autonomy, reducing civilian casualties, identifying lessons from current operations, security assistance, and counterterrorism.
cna talks: Civilian Harm Mitigation is a Moral Imperative and a Strategic Priority
/our-media/podcasts/cna-talks/2024/02/civilian-harm-mitigation-is-a-moral-imperative-and-a-strategic-priority
The DOD Civilian Harm Mitigation and Response Action Plan (CHMR-AP) represents a significant step forward for global efforts to reduce civilian harm. The plan recognizes that reducing civilian harm is not just a moral imperative but a strategic priority. It lays out concrete steps that the Department of Defense can take to mitigate civilian harm caused by its operations. Larry Lewis, Marla Keenan, and Sabrina Verleysen join John Stimpson in this episode. They discuss the CHMR-AP and the decades of work on civilian harm mitigation that made it possible.
Biographies
Dr. Larry Lewis is a Principal Research Scientist in CNA’s Operations Division. Dr. Lewis spearheaded the first data-based approach to protecting civilians in conflict by analyzing military operational data in conjunction with open-source data. He has worked …
ai with ai: How I Learned to Stop Worrying and Love AI
/our-media/podcasts/ai-with-ai/season-1/1-44
The Director of CNA’s Center for Autonomy and AI, Dr. Larry Lewis, joins Dave for a discussion on understanding and mitigating the risks of using autonomy and AI in war. They discuss some commonly voiced risks of autonomy and AI, both in application to war and in general: that AI will destroy the world; that AI and lethal autonomy are unethical; lack of accountability; and lack of discrimination. Having examined the underpinnings of these commonly voiced risks, Larry and Dave move on to practical descriptions and identification of risks for the use of AI and autonomy in war, including the context of military operations; the supporting institutional development (including materiel, training, and test and evaluation); and the law and policy that govern their use. They wrap up with a discussion of the current status of organizations and thought leaders in the Department of Defense and the Department of the Navy.
ai with ai: U.N. Convention on Conventional Weapons, Part II
/our-media/podcasts/ai-with-ai/season-1/1-6b
Dr. Larry Lewis joins Andy and Dave to discuss the U.N. Convention on Conventional Weapons, which met in mid-November with a "mandate to discuss" the topic of lethal autonomous weapons. Larry provides an overview of the group's purpose, its schedule and discussions, the mood and reaction of various parts of the group, and what the next steps might be.
TOPICS
November 13-17 meeting of the Convention on Conventional Weapons (CCW) Group of Governmental Experts (GGE) on lethal autonomous weapons systems (86 countries …